    Breaking the PayPal HIP: A Comparison of classifiers

    Human Interactive Proofs (HIPs) are a method used to differentiate between humans and machines on the internet. Providers of online services such as PayPal.com use HIPs to prevent automated signups and abuse of their services. In this experiment, a three-step algorithm was developed to break the PayPal.com HIP. The image is preprocessed to remove noise using thresholding and a simple cleaning technique, and then segmented using vertical projections and candidate split positions. Four classification methods have been implemented: pixel counting, vertical projections, horizontal projections, and template correlations. The system was trained on a sample of twenty PayPal.com HIPs to create thirty-six training templates (one for each character: 0-9 and A-Z). A sample of 100 PayPal.com HIPs was used for testing. The following HIP success rates were achieved with the different classifiers: pixel counting 8%, vertical projections 97%, horizontal projections 100%, and template correlations 100%. Three of the classifiers outperform the 88% HIP success rate of [6].
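    The abstract names the pipeline stages but gives no implementation details; the following is a minimal sketch of the vertical-projection segmentation and template-correlation classification steps, assuming binarized images as NumPy arrays (all function and variable names here are illustrative, not taken from the paper):

```python
import numpy as np

def binarize(image, threshold=128):
    """Threshold a grayscale image so text pixels become 1 and background 0."""
    return (image < threshold).astype(np.uint8)

def segment_by_vertical_projection(binary):
    """Split a binarized HIP image into character slices at empty columns.

    Column sums form the vertical projection; runs of all-zero columns
    serve as candidate split positions between characters.
    """
    projection = binary.sum(axis=0)
    segments, start = [], None
    for x, count in enumerate(projection):
        if count > 0 and start is None:
            start = x                       # entering a character run
        elif count == 0 and start is not None:
            segments.append(binary[:, start:x])  # leaving a character run
            start = None
    if start is not None:                   # character touches the right edge
        segments.append(binary[:, start:])
    return segments

def classify_by_template_correlation(segment, templates):
    """Return the label of the template with the highest correlation.

    `templates` maps labels ('0'-'9', 'A'-'Z') to binary arrays; for
    simplicity this sketch assumes segment and templates share one shape.
    """
    best_label, best_score = None, -1.0
    for label, template in templates.items():
        score = np.corrcoef(segment.ravel(), template.ravel())[0, 1]
        if score > best_score:
            best_label, best_score = label, score
    return best_label
```

    The simpler classifiers in the abstract would follow the same pattern but compare a scalar ink count (pixel counting) or a projection vector (vertical/horizontal projections) against per-character statistics instead of a full template.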

    Evaluating the usability and security of a video CAPTCHA

    A CAPTCHA is a variation of the Turing test, in which a challenge is used to distinguish humans from computers (‘bots’) on the internet. They are commonly used to prevent the abuse of online services. CAPTCHAs discriminate using hard artificial intelligence problems: the most common type requires a user to transcribe distorted characters displayed within a noisy image. Unfortunately, many users find them frustrating, and break rates as high as 60% have been reported (for Microsoft’s Hotmail). We present a new CAPTCHA in which users provide three words (‘tags’) that describe a video. A challenge is passed if a user’s tag belongs to a set of automatically generated ground-truth tags. In an experiment, we were able to increase human pass rates for our video CAPTCHAs from 69.7% to 90.2% (184 participants over 20 videos). Under the same conditions, the pass rate for an attack submitting the three most frequent tags (estimated over 86,368 videos) remained nearly constant (5% over the 20 videos, roughly 12.9% over a separate sample of 5,146 videos). Challenge videos were taken from YouTube.com. For each video, 90 tags were added from related videos to the ground-truth set; security was maintained by pruning all tags with a frequency above 0.6%. Tag stemming and approximate matching were also used to increase human pass rates. Only 20.1% of participants preferred text-based CAPTCHAs, while 58.2% preferred our video-based alternative. Finally, we demonstrate how our technique for extending the ground-truth tags allows for different usability/security trade-offs, and discuss how it can be applied to other types of CAPTCHAs.
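    The abstract describes the validation rule (tag in ground-truth set, with frequency pruning, stemming, and approximate matching) but not its implementation; below is a minimal sketch using standard-library tools, where the toy suffix-stripping stemmer, the difflib similarity measure, the 0.8 cutoff, and all function names are assumptions rather than the paper's exact method:

```python
from difflib import SequenceMatcher

def crude_stem(word):
    """Toy suffix-stripping stemmer (stand-in for whatever the paper used)."""
    for suffix in ("ing", "es", "s"):
        if word.endswith(suffix) and len(word) > len(suffix) + 2:
            return word[: -len(suffix)]
    return word

def prune_frequent_tags(ground_truth, tag_frequencies, threshold=0.006):
    """Drop tags whose corpus frequency exceeds the threshold (0.6% in the
    abstract), so an attacker submitting globally common tags gains nothing."""
    return {t for t in ground_truth if tag_frequencies.get(t, 0.0) <= threshold}

def challenge_passed(user_tags, ground_truth, min_ratio=0.8):
    """Pass if any stemmed user tag approximately matches a ground-truth tag."""
    stemmed_truth = {crude_stem(t) for t in ground_truth}
    for tag in user_tags:
        stem = crude_stem(tag.lower().strip())
        for truth in stemmed_truth:
            if SequenceMatcher(None, stem, truth).ratio() >= min_ratio:
                return True
    return False

# Example: a near-miss spelling of a ground-truth tag should still pass.
# challenge_passed(["skateboardng"], {"skateboarding"})  -> True
```

    Loosening `min_ratio` or skipping the frequency pruning illustrates the usability/security trade-off the abstract mentions: more human-entered variants pass, but frequent-tag attacks become more effective.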

    Video CAPTCHAs: Usability vs. Security

    A Completely Automated Public Turing test to tell Computers and Humans Apart (CAPTCHA) is a variation of the Turing test, in which a challenge is used to distinguish humans from computers (‘bots’) on the internet. They are commonly used to prevent the abuse of online services; for example, malicious users have written automated programs that sign up for thousands of free email accounts and send SPAM messages. A number of hard artificial intelligence problems, including natural language processing, speech recognition, character recognition, and image understanding, have been used as the basis for these challenges, on the expectation that humans will outperform bots. The most common type of CAPTCHA requires a user to transcribe distorted characters displayed within a noisy image. Unfortunately, many users find CAPTCHAs based on character recognition frustrating, and attack success rates as high as 60% have been reported for Microsoft’s Hotmail CAPTCHA [8]. To address these problems, we present a first attempt at using content-based video labeling (‘tagging’) as the basis for a CAPTCHA.